
    Learning a spin glass: determining Hamiltonians from metastable states

    We study the problem of determining the Hamiltonian of a fully connected Ising spin glass of $N$ units from a set of measurements, whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms. Comment: 5 pages, 1 figure, to appear in Physica
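
    As an illustration of the setting only (the paper's actual algorithms are not specified here), the minimal Python sketch below generates metastable states of a teacher spin glass by zero-temperature quenches and lets a student learn couplings with a perceptron-like rule enforcing the single-spin-flip stability condition on every measured minimum; the system size, number of measurements and the learning rule itself are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 50                                   # number of spins (illustrative size)

        # Hypothetical teacher couplings: symmetric, Gaussian, zero diagonal.
        J_teacher = rng.normal(size=(N, N)) / np.sqrt(N)
        J_teacher = (J_teacher + J_teacher.T) / 2
        np.fill_diagonal(J_teacher, 0.0)

        def quench(J, s):
            """Zero-temperature single-spin-flip dynamics down to a local energy minimum."""
            while True:
                unstable = np.where(s * (J @ s) < 0)[0]
                if unstable.size == 0:
                    return s
                s[rng.choice(unstable)] *= -1

        # "Measurements": metastable states of the teacher's energy landscape.
        minima = [quench(J_teacher, rng.choice([-1, 1], size=N)) for _ in range(200)]

        # Student: perceptron-like rule enforcing, row by row, the stability condition
        # s_i * sum_j J_ij s_j > 0 on every measured minimum (illustrative algorithm).
        J_student = np.zeros((N, N))
        for _ in range(50):                      # training sweeps
            for s in minima:
                h = J_student @ s
                for i in np.where(s * h <= 0)[0]:
                    J_student[i] += s[i] * s / N
                    J_student[i, i] = 0.0

        overlap = np.sum(J_student * J_teacher) / (
            np.linalg.norm(J_student) * np.linalg.norm(J_teacher))
        print(f"teacher-student coupling overlap: {overlap:.3f}")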

    Online Learning with Ensembles

    Supervised online learning with an ensemble of students randomized by the choice of initial conditions is analyzed. For the case of the perceptron learning rule, asymptotically the same improvement in the generalization error of the ensemble over the performance of a single student is found as in Gibbs learning. For more optimized learning rules, however, using an ensemble yields no improvement. This is explained by showing that for any learning rule $f$ a transform $\tilde{f}$ exists such that a single student using $\tilde{f}$ has the same generalization behaviour as an ensemble of $f$-students. Comment: 8 pages, 1 figure. Submitted to J. Phys.
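
    A minimal simulation sketch of the scenario, assuming the standard perceptron rule and using the centre of mass of the student weight vectors as the ensemble predictor (one of several possible ensemble definitions, not necessarily the one analysed in the paper); all sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        N, K, P = 100, 10, 2000              # input dimension, ensemble size, examples (assumed)

        w_teacher = rng.normal(size=N)
        students = rng.normal(size=(K, N))   # students differ only in their initial conditions

        def gen_error(w, w_t):
            """Generalization error of a perceptron: disagreement probability = angle / pi."""
            c = w @ w_t / (np.linalg.norm(w) * np.linalg.norm(w_t))
            return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

        for _ in range(P):                            # one on-line pass over a stream of examples
            x = rng.normal(size=N)
            y = np.sign(w_teacher @ x)
            for k in range(K):
                if np.sign(students[k] @ x) != y:     # perceptron rule: update only on mistakes
                    students[k] += y * x / np.sqrt(N)

        print("single student :", gen_error(students[0], w_teacher))
        print("ensemble (mean):", gen_error(students.mean(axis=0), w_teacher))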

    Dynamical transitions in the evolution of learning algorithms by selection

    We study the evolution of artificial learning systems by means of selection. Genetic programming is used to generate a sequence of populations of algorithms which can be used by neural networks for supervised learning of a rule that generates examples. Rather than concentrating on final results, which would be the natural aim when designing good learning algorithms, we study the evolution process and pay particular attention to the temporal order of appearance of functional structures responsible for improvements in the learning process, as measured by the generalization capabilities of the resulting algorithms. The effect of such appearances can be described as dynamical phase transitions. The concepts of phenotypic and genotypic entropies, which describe the distribution of fitness in the population and the distribution of symbols, respectively, are used to monitor the dynamics. In different runs the phase transitions may or may not be present, with the system either finding good solutions or staying in poor regions of algorithm space. Whenever phase transitions occur, the sequence of appearances is the same. We identify combinations of variables and operators which are useful in measuring experience or performance in rule extraction and can thus implement useful annealing of the learning schedule. Comment: 11 pages, 11 figures, 2 tables
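
    A small sketch, under assumed definitions, of the two monitoring quantities: phenotypic entropy as the Shannon entropy of the binned fitness distribution, and genotypic entropy as the Shannon entropy of symbol frequencies in the population. The toy population and its token representation are hypothetical.

        import numpy as np
        from collections import Counter

        def phenotypic_entropy(fitnesses, bins=20):
            """Shannon entropy of the (binned) fitness distribution across the population."""
            counts, _ = np.histogram(fitnesses, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return float(-np.sum(p * np.log(p)))

        def genotypic_entropy(programs):
            """Shannon entropy of the operator/variable symbol frequencies in the population."""
            freqs = Counter(token for prog in programs for token in prog)
            total = sum(freqs.values())
            p = np.array([c / total for c in freqs.values()])
            return float(-np.sum(p * np.log(p)))

        # Hypothetical toy population: programs as token lists, with made-up fitness values.
        population = [["+", "x", "*", "y", "y"], ["-", "x", "x"], ["+", "x", "y"]]
        fitnesses = [0.3, 0.8, 0.5]
        print(phenotypic_entropy(fitnesses), genotypic_entropy(population))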

    Stability diagrams for bursting neurons modeled by three-variable maps

    We study a simple map as a minimal model of excitable cells. The map has two fast variables which mimic the behavior of class I neurons, undergoing a sub-critical Hopf bifurcation. Adding a third slow variable allows the system to exhibit bursts and other interesting biological behaviors. Bifurcation lines which locate the excitability region are obtained for different planes in parameter space. Comment: 7 pages, 3 figures, accepted for publication
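
    The abstract does not give the map equations, so the sketch below uses a hypothetical bounded three-variable map (two fast variables, one slow adaptation variable) purely to illustrate how a stability diagram can be traced numerically: scan a parameter plane and classify the asymptotic behaviour at each point. The map, its parameters and the classification threshold are all assumptions.

        import numpy as np

        def step(x, y, z, a, b, mu=0.005):
            """One iteration of a toy three-variable map: (x, y) fast, z a slow adaptation variable."""
            x_new = np.tanh(a * x - y + z + 0.5)     # fast "voltage-like" variable
            y_new = np.tanh(b * (x - 0.2))           # fast recovery variable
            z_new = (1.0 - mu) * z - mu * x          # slow negative feedback (drives bursting)
            return x_new, y_new, z_new

        def classify(a, b, n_transient=3000, n_measure=2000):
            """Label one point of the (a, b) parameter plane by its asymptotic behaviour."""
            x = y = z = 0.0
            for _ in range(n_transient):
                x, y, z = step(x, y, z, a, b)
            trace = []
            for _ in range(n_measure):
                x, y, z = step(x, y, z, a, b)
                trace.append(x)
            return "fixed point" if np.std(trace) < 1e-4 else "oscillating/bursting"

        # Crude stability diagram: scan a grid of the parameter plane and print the regime found.
        for a in np.linspace(0.5, 3.0, 6):
            row = [classify(a, b)[0] for b in np.linspace(0.5, 3.0, 6)]   # 'f' or 'o'
            print(f"a={a:3.1f}  " + " ".join(row))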

    Gradient descent learning in and out of equilibrium

    Relations between the out-of-thermal-equilibrium dynamical process of on-line learning and thermally equilibrated off-line learning are studied for potential-based gradient descent learning. The approach of Opper for studying on-line Bayesian algorithms is extended to potential-based or maximum-likelihood learning. We look at the on-line learning algorithm that best approximates the off-line algorithm in the sense of least Kullback-Leibler information loss. It works by updating the weights along the gradient of an effective potential different from the parent off-line potential. The interpretation of this off-equilibrium dynamics bears some similarity to the cavity approach of Griniasty. We are able to analyze networks with non-smooth transfer functions and transfer the smoothness requirement to the potential. Comment: 8 pages, submitted to the Journal of Physics
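
    To make the on-line/off-line distinction concrete, here is a hedged toy comparison (not the paper's effective-potential construction): a single on-line pass taking gradient steps on a per-example potential versus repeated batch descent on the potential summed over the training set, with a logistic potential standing in for a generic smooth $V$.

        import numpy as np

        rng = np.random.default_rng(2)
        N, P, eta = 20, 500, 0.05                    # dimension, examples, learning rate (assumed)

        w_teacher = rng.normal(size=N)
        X = rng.normal(size=(P, N)) / np.sqrt(N)
        y = np.sign(X @ w_teacher)

        def grad_V(h, y):
            """Gradient (in the field h) of a smooth per-example potential V = log(1 + exp(-y h))."""
            return -y / (1.0 + np.exp(y * h))

        # On-line: a single pass, one example at a time -- an out-of-equilibrium stochastic dynamics.
        w_on = np.zeros(N)
        for x_mu, y_mu in zip(X, y):
            w_on -= eta * grad_V(w_on @ x_mu, y_mu) * x_mu

        # Off-line: repeated (batch) descent on the potential summed over the whole training set.
        w_off = np.zeros(N)
        for _ in range(200):
            w_off -= eta * (X.T @ grad_V(X @ w_off, y)) / P

        def gen_error(w):
            c = w @ w_teacher / (np.linalg.norm(w) * np.linalg.norm(w_teacher))
            return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

        print("on-line eg :", gen_error(w_on))
        print("off-line eg:", gen_error(w_off))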

    Functional Optimisation of Online Algorithms in Multilayer Neural Networks

    We study the online dynamics of learning in fully connected soft committee machines in the student-teacher scenario. The locally optimal modulation function, which determines the learning algorithm, is obtained from a variational argument so as to maximise the average generalisation error decay per example. Simulation results for the resulting algorithm are presented for a few cases. The symmetric phase plateaux are found to be vastly reduced in comparison to those found when online backpropagation algorithms are used. A discussion of the implementation of these ideas as practical algorithms is given.
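
    The variational argument can be summarised, for a single-unit student and in conventional order-parameter notation (an assumed but standard formulation, written here for a simple perceptron rather than the full soft committee machine), as follows. The weights are updated with a general modulation function $F$,

        J^{\mu+1} = J^{\mu} + \frac{1}{N}\, F(h^{\mu}, \sigma^{\mu})\, \xi^{\mu},
        \qquad h^{\mu} = \frac{J^{\mu} \cdot \xi^{\mu}}{\sqrt{N}},

    where $\xi^{\mu}$ is the current example and $\sigma^{\mu}$ the teacher's output. In the thermodynamic limit the overlaps $R = J \cdot B / N$ (with teacher weights $B$) and $Q = J \cdot J / N$ obey deterministic flow equations in $\alpha = \mu / N$, and the generalisation error is a function $\epsilon_g(R, Q)$, so the locally optimal modulation function is the one that, at each $\alpha$, maximises the error decay per example:

        F^{*} = \arg\max_{F} \left( -\frac{d\epsilon_g}{d\alpha} \right)
              = \arg\max_{F} \left( -\frac{\partial \epsilon_g}{\partial R}\, \frac{dR}{d\alpha}
                                    - \frac{\partial \epsilon_g}{\partial Q}\, \frac{dQ}{d\alpha} \right).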